
    A possible rheological model of gum candies

    An appropriate rheological model can support the production of the good-quality gum candies that consumers require. For this purpose, Creep-Recovery Test (CRT) curves were recorded with a Stable Micro System TA.XT-2 precision texture analyser fitted with a 75 mm diameter cylinder probe, on gum candies purchased from the local market. The deformation speed was 0.2 mm s⁻Âč, the creep and recovery times were 60 s each, and the loading force was set to 1 N, 2 N, 5 N, 7 N, and 10 N. The two-element Kelvin-Voigt model, a three-element model, and the four-element Burgers model were fitted to the recorded creep data, and the parameters of the models were evaluated. Of the models used, the Burgers model gave the best fit.
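
    As a rough illustration, the Burgers creep response has a closed form that can be fitted by nonlinear least squares. A minimal Python sketch follows; the parameter values, noise level, and data are illustrative assumptions, not the paper's measurements.

    import numpy as np
    from scipy.optimize import curve_fit

    # Creep compliance of the four-element Burgers model under constant
    # stress sigma0: an elastic spring (E1) and a free dashpot (eta1) in
    # series with a Kelvin-Voigt element (E2, eta2).
    def burgers_creep(t, E1, eta1, E2, eta2, sigma0=1.0):
        return sigma0 * (1.0 / E1 + t / eta1
                         + (1.0 / E2) * (1.0 - np.exp(-E2 * t / eta2)))

    # Illustrative 60 s creep phase; synthetic noisy "measurements" stand in
    # for the texture-analyser strain record (all values are assumptions).
    t = np.linspace(0.0, 60.0, 601)
    rng = np.random.default_rng(0)
    strain = (burgers_creep(t, 2.0e4, 5.0e6, 8.0e4, 1.0e6)
              + rng.normal(0.0, 2.0e-6, t.size))

    # Fit the four model parameters; curve_fit leaves sigma0 at its default.
    popt, _ = curve_fit(burgers_creep, t, strain, p0=[1e4, 1e6, 1e4, 1e6])
    print("fitted E1, eta1, E2, eta2:", popt)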

    Lines, Circles, Planes and Spheres

    Let $S$ be a set of $n$ points in $\mathbb{R}^3$, no three collinear and not all coplanar. If at most $n-k$ are coplanar and $n$ is sufficiently large, the total number of planes determined is at least $1 + k\binom{n-k}{2} - \binom{k}{2}\left(\frac{n-k}{2}\right)$. For similar conditions and sufficiently large $n$, we also show (inspired by the work of P. D. T. A. Elliott in \cite{Ell67}) that the number of spheres determined by $n$ points is at least $1 + \binom{n-1}{3} - t_3^{\mathrm{orchard}}(n-1)$, and this bound is best possible under its hypothesis. (By $t_3^{\mathrm{orchard}}(n)$ we denote the maximum number of three-point lines attainable by a configuration of $n$ points in the plane, no four collinear, i.e., the classic Orchard Problem.) New lower bounds are also given for both lines and circles. Comment: 37 pages
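
    As a quick numeric illustration of the stated plane-count bound, the following Python snippet evaluates it for sample values of $n$ and $k$ (the sample values are our own choice, not from the paper):

    from math import comb

    # The stated lower bound: 1 + k*C(n-k, 2) - C(k, 2)*(n-k)/2.
    def plane_lower_bound(n: int, k: int) -> float:
        return 1 + k * comb(n - k, 2) - comb(k, 2) * (n - k) / 2

    # Note that for k = 1 the bound reduces to 1 + C(n-1, 2).
    for n, k in [(100, 1), (100, 2), (100, 5)]:
        print(n, k, plane_lower_bound(n, k))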

    On the strength of the finite intersection principle

    We study the logical content of several maximality principles related to the finite intersection principle ($\mathsf{FIP}$) in set theory. Classically, these are all equivalent to the axiom of choice, but in the context of reverse mathematics their strengths vary: some are equivalent to $\mathsf{ACA}_0$ over $\mathsf{RCA}_0$, while others are strictly weaker and incomparable with $\mathsf{WKL}_0$. We show that there is a computable instance of $\mathsf{FIP}$ all of whose solutions have hyperimmune degree, and that every computable instance has a solution in every nonzero c.e. degree. In terms of other weak principles previously studied in the literature, the former result translates to $\mathsf{FIP}$ implying the omitting partial types principle ($\mathsf{OPT}$). We also show that, modulo $\Sigma^0_2$ induction, $\mathsf{FIP}$ lies strictly below the atomic model theorem ($\mathsf{AMT}$). Comment: This paper corresponds to section 3 of arXiv:1009.3242, "Reverse mathematics and equivalents of the axiom of choice", which has been abbreviated and divided into two pieces for publication

    Depth, Highness and DNR Degrees

    A sequence is Bennett deep [5] if every recursive approximation from above of the Kolmogorov complexity of its initial segments satisfies that the difference between the approximation and the actual value of the initial segment complexity dominates every constant function. We study, for different lower bounds r on this difference between approximation and actual value of the initial segment complexity, which properties the corresponding r(n)-deep sets have. We prove that for r(n) = Δn, depth coincides with highness on the Turing degrees. For smaller choices of r, i.e., for r any recursive order function, we show that depth implies either highness or diagonal non-recursiveness (DNR). In particular, for left-r.e. sets, order depth already implies highness. As a corollary, we obtain that weakly useful sets are either high or DNR.

    We prove that not all deep sets are high by constructing a low order-deep set. Bennett's depth is defined using prefix-free Kolmogorov complexity. We show that if one replaces prefix-free by plain Kolmogorov complexity in Bennett's depth definition, one obtains a notion which no longer satisfies the slow growth law (which stipulates that no shallow set truth-table computes a deep set); however, under this notion, random sets are not deep (at the unbounded recursive order magnitude). We improve Bennett's result that recursive sets are shallow by proving that all K-trivial sets are shallow; our result is close to optimal.

    For Bennett's depth, the magnitude of compression improvement has to be achieved almost everywhere on the set. Bennett observed that relaxing this to infinitely often is meaningless, because every recursive set is infinitely often deep. We propose an alternative infinitely-often depth notion that does not suffer from this limitation (called i.o. depth). We show that every hyperimmune degree contains an i.o. deep set of magnitude Δn, and construct a Π⁰₁ class in which every member is an i.o. deep set of magnitude Δn. We prove that every non-recursive, non-DNR, hyperimmune-free set is i.o. deep of constant magnitude, and that every non-recursive many-one degree contains such a set.
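
    The opening definition is dense; restated symbolically (our transcription of the abstract's wording, not the paper's own formulation), with K denoting prefix-free Kolmogorov complexity and X \restriction n the length-n initial segment of X:

    % Transcription of the abstract's definition (an assumption, as we read
    % it): X is r-deep when every recursive upper approximation g of K
    % overshoots the true initial-segment complexity by r(n) almost everywhere.
    X \text{ is } r\text{-deep} \iff \forall\, \text{recursive } g \colon
      \bigl[\forall n \; g(n) \ge K(X \restriction n)\bigr] \Longrightarrow
      \bigl[g(n) - K(X \restriction n) \ge r(n) \text{ for almost all } n\bigr].

    On this reading, Bennett's original notion corresponds to the difference dominating every constant function, i.e., r-depth for every constant r.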

    Rainfall and river flow ensemble verification: prototype framework and demonstration

    This report presents work undertaken by CEH and the Met Office under the “Rainfall and River Flow Ensemble Verification” project commissioned by the Flood Forecasting Centre on behalf of the Scottish Environment Protection Agency, the Environment Agency and Natural Resources Wales. A Prototype Framework for joint river flow and precipitation ensemble verification is developed and its use demonstrated on example verification periods and case-study storms. This Framework constitutes a set of recommended metrics (scores and diagrams) for verifying ensemble forecasts of precipitation and river flow, along with consideration of their application to the forecast and observation datasets involved. The work provides a foundation for the follow-on FCERM R&D Project SC150016, entitled “Improving confidence in Flood Guidance through verification of rainfall and river flow ensembles”.

    The proposed Prototype Framework is presented, with details given of the chosen verification metrics along with supporting data requirements. Aspects requiring coordination between the hydrological and meteorological components of the study are discussed, including the use of thresholds, accumulation periods for precipitation, and lead-time considerations. Sources of precipitation verification truth data (radar- and raingauge-based) and their effect on the verification analyses are explored, and the Rank Histogram is found to be particularly sensitive. Daily precipitation accumulations are used to obtain an upper bound on precipitation forecast skill, whilst hourly accumulations expose the effect of timing uncertainties and allow a closer link to the river flow verification, which is made at a 15-minute time-step.

    To provide an overview of ensemble performance, verification statistics are calculated at national and regional scales. Overall, for the 32-day verification periods considered here, it is found that probabilistic forecasts derived from both the river flow and the precipitation accumulation ensembles tend to be over-confident (the forecast probabilities are higher than the observed frequencies of occurrence suggest). Whilst the river flow ensemble is under-spread according to the Rank Histogram, the outcome is less clear for the precipitation ensembles and depends on the truth type used. Considering the ROC Skill Score, both river flow and precipitation accumulation ensembles show good potential skill. Threshold-based verification scores are regionally dependent for both river flow and precipitation accumulations, although the details of these dependencies vary with ensemble type and verification metric. Overall, ensemble skill is lower in the drier regions towards the southeast of England.

    Maps of verification scores for individual catchment sites provide a visual overview of ensemble performance. However, using information only for an individual catchment, particularly for rare events, is found to give analyses dominated by sampling uncertainty. This is the case for river flow when using return-period thresholds of interest for flood forecasting. To obtain a meaningful verification analysis at the scale of individual catchments, river flow data are pooled by catchment area within a given region. A prototype ensemble verification site performance summary, containing relevant verification information for a specific site of interest, is presented. This summary brings together different verification scores and diagrams for both river flow and precipitation accumulations.

    Verification information from each site summary can be incorporated directly into the forecasting process. Possible methods of displaying this information are presented, with examples given for river flow over specific flood-producing case-study storms. These displays are shown alongside those placing the ensemble uncertainty in the context of the climatological ensemble spread. It is thus demonstrated how, with appropriate understanding of sampling uncertainties, relevant verification information can be used to give an informed interpretation of the quantitative likelihood from an ensemble forecast. The main findings of the study are summarised as a set of key conclusions, with sampling uncertainty identified as a major consideration for meaningful verification, influenced by threshold choice, period length, and flooding regime. Building on the Prototype Framework, and making use of the demonstration verification findings, recommendations are made for the possible characteristics of an ensemble verification system. As a foundation report guiding a follow-on R&D project of greater depth, areas requiring further consideration are identified. These aim to align future research to help develop robust and effective verification systems having real operational value to flood forecasting, guidance and warning.
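
    For concreteness, here is a minimal Python sketch of one of the recommended metrics, the Rank Histogram, applied to synthetic data (the variable names and values are our own illustration, not the report's datasets):

    import numpy as np

    # Rank histogram (Talagrand diagram). obs: (T,) observed values;
    # ens: (T, M) ensemble members. A flat histogram indicates a well-spread
    # ensemble; a U-shape indicates under-spread (over-confidence), as the
    # report finds for the river flow ensemble.
    def rank_histogram(obs, ens):
        ranks = (ens < obs[:, None]).sum(axis=1)  # obs rank among M members
        return np.bincount(ranks, minlength=ens.shape[1] + 1)

    # Synthetic illustration (an assumption, not the report's data): a
    # deliberately under-spread 12-member ensemble around a noisy "truth".
    rng = np.random.default_rng(1)
    centre = rng.gamma(2.0, 2.0, size=1000)                       # forecast centre
    truth = centre + rng.normal(0.0, 2.0, size=1000)              # observed flows
    members = centre[:, None] + rng.normal(0.0, 0.5, (1000, 12))  # spread too narrow
    print(rank_histogram(truth, members))  # counts pile up in the end bins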